
feat: add MiniMax as native AI provider#414

Open
regardtvdvyver wants to merge 3 commits into BeehiveInnovations:main from regardtvdvyver:feat/add-minimax-provider

Conversation

@regardtvdvyver

Summary

Adds MiniMax as a native AI provider following the established X.AI provider pattern. MiniMax offers an OpenAI-compatible API at https://api.minimax.io/v1 with 5 models:

  • MiniMax-M2.5 — 196K context, frontier reasoning
  • MiniMax-M2.5-highspeed — 204K context, fast reasoning
  • MiniMax-M2.1 — 204K context, advanced reasoning
  • MiniMax-M2.1-highspeed — 204K context, fast reasoning
  • MiniMax-M2 — 204K context, capable reasoning

All models support streaming, function calling, and system prompts. No vision support.
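Since the API is OpenAI-compatible, a client can talk to it with a standard chat-completions request. The sketch below builds (but does not send) such a request against the endpoint and model name from this PR; the payload fields follow the usual OpenAI chat API shape and are an assumption here, not something taken from the provider code in this PR.

```python
import json
import os
import urllib.request

# Endpoint and model name are from the PR description; the payload
# shape is the standard OpenAI chat-completions convention (assumed).
BASE_URL = "https://api.minimax.io/v1"

payload = {
    "model": "MiniMax-M2.5",
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": False,  # all five models also support streaming
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('MINIMAX_API_KEY', '')}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; omitted so the
# sketch stays offline.
print(req.full_url)  # -> https://api.minimax.io/v1/chat/completions
```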

Changes

New files:

  • conf/minimax_models.json — Model registry with 5 models and 13 aliases
  • providers/registries/minimax.py — MiniMaxModelRegistry
  • providers/minimax.py — MiniMaxModelProvider

Modified files:

  • providers/shared/provider_type.py — Added MINIMAX enum
  • providers/registry.py — Key mapping + priority order
  • server.py — Registration, key check, startup validation
  • providers/__init__.py — Export
  • providers/registries/__init__.py — Export
  • tools/listmodels.py — Display in model listing
  • .env.example — Configuration docs and examples
  • tests/test_auto_mode_model_listing.py — Test env isolation
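The contents of conf/minimax_models.json are not shown on this page, so as an illustration of the "5 models and 13 aliases" it describes, here is a minimal Python sketch of alias resolution in a registry of this kind. The alias strings below are hypothetical, not the file's actual contents.

```python
# Hypothetical alias table: short names mapping to canonical model
# names (the real registry defines 13 aliases; these are examples).
ALIASES = {
    "m2.5": "MiniMax-M2.5",
    "m2.5-highspeed": "MiniMax-M2.5-highspeed",
    "m2.1": "MiniMax-M2.1",
    "m2.1-highspeed": "MiniMax-M2.1-highspeed",
    "m2": "MiniMax-M2",
}


def resolve_model(name: str) -> str:
    """Map an alias (case-insensitive) to its canonical model name,
    passing unknown names through unchanged."""
    return ALIASES.get(name.lower(), name)


print(resolve_model("M2.5"))  # -> MiniMax-M2.5
```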

Test plan

  • ./code_quality_checks.sh — 890 passed, 0 failed
  • listmodels MCP tool shows MiniMax models and aliases
  • Gemini code review passed (all integration points verified)

Regardt van de Vyver added 2 commits March 1, 2026 16:04
Add MiniMax (M2.5, M2.5-highspeed, M2.1, M2.1-highspeed, M2) as a
registry-backed OpenAI-compatible provider. Follows the same pattern
as the X.AI provider with JSON model config, registry loader, and
full alias support.
- Add MiniMax to listmodels provider display
- Add MiniMax to startup model restriction validation
- Fix test env isolation for MINIMAX_API_KEY leakage
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces MiniMax as a new native AI provider, expanding the platform's capabilities to support additional large language models. The changes involve adding specific MiniMax model definitions and integrating them into the existing provider architecture, allowing users to configure and utilize MiniMax models for various AI tasks. This enhancement broadens the range of available AI services without disrupting current functionalities.

Highlights

  • New MiniMax AI Provider Integration: MiniMax has been added as a native AI provider, following the established pattern for other AI providers like OpenAI and X.AI. This includes new files for its model registry and provider implementation.
  • Model Support and Capabilities: The integration supports five MiniMax models (M2.5, M2.5-highspeed, M2.1, M2.1-highspeed, M2) with various context windows and reasoning capabilities. All models support streaming, function calling, and system prompts, but lack vision support.
  • System-wide Integration: The MiniMax provider is fully integrated into the system's core, including environment variable configuration, provider registration, model listing tools, and test suites, ensuring seamless operation alongside existing providers.


Changelog
  • .env.example
    • Updated comments to include MiniMax as a native API option.
    • Added MINIMAX_API_KEY example.
    • Included supported MiniMax models and their shorthands in the model configuration section.
    • Added MINIMAX_ALLOWED_MODELS examples for model restriction configuration.
  • conf/minimax_models.json
    • Added new file defining metadata for MiniMax models, including names, aliases, capabilities, and intelligence scores.
  • providers/__init__.py
    • Imported MiniMaxModelProvider.
    • Exported MiniMaxModelProvider in __all__.
  • providers/minimax.py
    • Added new file implementing the MiniMaxModelProvider class, extending OpenAICompatibleProvider and RegistryBackedProviderMixin.
  • providers/registries/__init__.py
    • Imported MiniMaxModelRegistry.
    • Exported MiniMaxModelRegistry in __all__.
  • providers/registries/minimax.py
    • Added new file implementing the MiniMaxModelRegistry class, inheriting from CapabilityModelRegistry.
  • providers/registry.py
    • Added ProviderType.MINIMAX to the PROVIDER_PRIORITY_ORDER list.
    • Added MINIMAX_API_KEY to the _API_KEY_MAP for API key retrieval.
  • providers/shared/provider_type.py
    • Added MINIMAX enum member to ProviderType.
  • server.py
    • Added MINIMAX_API_KEY to the list of environment variables checked at startup.
    • Imported MiniMaxModelProvider.
    • Added logic to detect and log the presence of MINIMAX_API_KEY.
    • Registered MiniMaxModelProvider if MINIMAX_API_KEY is present.
    • Included MINIMAX_API_KEY in the list of required API keys for error messages.
    • Updated the log message for native APIs to include MiniMax.
    • Added ProviderType.MINIMAX to the list of provider types validated for restrictions.
  • simulator_tests/test_chat_simple_validation.py
    • Removed an extraneous blank line.
  • simulator_tests/test_conversation_chain_validation.py
    • Removed an extraneous blank line.
  • simulator_tests/test_cross_tool_comprehensive.py
    • Removed an extraneous blank line.
  • simulator_tests/test_ollama_custom_url.py
    • Removed an extraneous blank line.
  • simulator_tests/test_openrouter_fallback.py
    • Removed an extraneous blank line.
  • simulator_tests/test_openrouter_models.py
    • Removed an extraneous blank line.
  • simulator_tests/test_xai_models.py
    • Removed an extraneous blank line.
  • tests/test_auto_mode_model_listing.py
    • Added MINIMAX_API_KEY to environment variables to be deleted for test isolation.
    • Added MINIMAX_ALLOWED_MODELS to environment variables to be set for test isolation.
    • Included MINIMAX_API_KEY in the list of variables to delete for test cleanup.
    • Added MINIMAX_ALLOWED_MODELS to the list of variables to delete for test cleanup.
  • tests/test_directory_expansion_tracking.py
    • Refactored multiline string literals to use implicit string concatenation for cleaner syntax.
  • tests/test_docker_implementation.py
    • Refactored multiline string literals to use implicit string concatenation for cleaner syntax.
  • tests/test_minimax_provider.py
    • Added new file containing unit tests for the MiniMaxModelProvider.
  • tests/test_prompt_regression.py
    • Refactored multiline string literals to use implicit string concatenation for cleaner syntax.
  • tools/listmodels.py
    • Updated the tool description to include MiniMax as a supported provider.
    • Added MiniMax to the PROVIDER_INFO dictionary for display in the model listing tool.
Activity
  • The author ran ./code_quality_checks.sh, which passed all 890 checks.
  • The listmodels MCP tool was verified to correctly display MiniMax models and their aliases.
  • A Gemini code review was conducted, and all integration points were verified.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request effectively adds MiniMax as a native AI provider, following the established patterns in the codebase. The changes include a new model registry, provider implementation, and necessary updates to server configuration and tests. My review includes a couple of suggestions to refactor small parts of the new code to improve clarity and reduce duplication. Overall, this is a well-executed feature addition.

Comment thread providers/minimax.py
Comment on lines +45 to +82
    def get_preferred_model(self, category: "ToolModelCategory", allowed_models: list[str]) -> Optional[str]:
        """Get MiniMax's preferred model for a given category from allowed models.

        Args:
            category: The tool category requiring a model
            allowed_models: Pre-filtered list of models allowed by restrictions

        Returns:
            Preferred model name or None
        """
        from tools.models import ToolModelCategory

        if not allowed_models:
            return None

        if category == ToolModelCategory.EXTENDED_REASONING:
            # Prefer M2.5 for advanced reasoning tasks
            if self.PRIMARY_MODEL in allowed_models:
                return self.PRIMARY_MODEL
            if self.FAST_MODEL in allowed_models:
                return self.FAST_MODEL
            return allowed_models[0]

        elif category == ToolModelCategory.FAST_RESPONSE:
            # Prefer M2.5-highspeed for speed
            if self.FAST_MODEL in allowed_models:
                return self.FAST_MODEL
            if self.PRIMARY_MODEL in allowed_models:
                return self.PRIMARY_MODEL
            return allowed_models[0]

        else:  # BALANCED or default
            # Prefer M2.5 for balanced use
            if self.PRIMARY_MODEL in allowed_models:
                return self.PRIMARY_MODEL
            if self.FAST_MODEL in allowed_models:
                return self.FAST_MODEL
            return allowed_models[0]
Contributor


Severity: medium

The logic for selecting a preferred model is a bit repetitive across the different categories. The EXTENDED_REASONING and BALANCED categories have identical logic, and the FAST_RESPONSE category just swaps the preference order. You can simplify this by defining the preference order in a list and then iterating through it to make the code more concise and maintainable.

    def get_preferred_model(self, category: "ToolModelCategory", allowed_models: list[str]) -> Optional[str]:
        """Get MiniMax's preferred model for a given category from allowed models.

        Args:
            category: The tool category requiring a model
            allowed_models: Pre-filtered list of models allowed by restrictions

        Returns:
            Preferred model name or None
        """
        from tools.models import ToolModelCategory

        if not allowed_models:
            return None

        if category == ToolModelCategory.FAST_RESPONSE:
            preference_order = [self.FAST_MODEL, self.PRIMARY_MODEL]
        else:  # BALANCED, EXTENDED_REASONING, or default
            preference_order = [self.PRIMARY_MODEL, self.FAST_MODEL]

        for model in preference_order:
            if model in allowed_models:
                return model

        return allowed_models[0]

Comment thread server.py
Comment on lines +467 to +471
    minimax_key = get_env("MINIMAX_API_KEY")
    if minimax_key and minimax_key != "your_minimax_api_key_here":
        valid_providers.append("MiniMax")
        has_native_apis = True
        logger.info("MiniMax API key found - MiniMax models available")
Contributor


Severity: medium

The condition minimax_key and minimax_key != "your_minimax_api_key_here" is used here and again later when registering the provider. To improve readability and avoid repetition, you could store the result of this check in a boolean variable (e.g., is_minimax_configured) and reuse it in both places.

Suggested change:

    minimax_key = get_env("MINIMAX_API_KEY")
    is_minimax_configured = minimax_key and minimax_key != "your_minimax_api_key_here"
    if is_minimax_configured:
        valid_providers.append("MiniMax")
        has_native_apis = True
        logger.info("MiniMax API key found - MiniMax models available")


@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 16f258c26a


Comment thread providers/registry.py
ProviderType.OPENAI, # Direct OpenAI access
ProviderType.AZURE, # Azure-hosted OpenAI deployments
ProviderType.XAI, # Direct X.AI GROK access
ProviderType.MINIMAX, # Direct MiniMax access


P1: Apply MiniMax allowlist before fallback model selection

By adding ProviderType.MINIMAX to PROVIDER_PRIORITY_ORDER, auto-mode can now prefer MiniMax models in get_preferred_fallback_model, but MINIMAX_ALLOWED_MODELS is not wired into ModelRestrictionService.ENV_VARS (utils/model_restrictions.py), so registry-level filtering still treats all MiniMax models as allowed. In deployments that restrict MiniMax to a subset (for example only m2), the registry can still select MiniMax-M2.5, and the request then fails later when provider-level checks reject that model, instead of selecting an actually allowed model up front.
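The fix Codex is pointing at amounts to honoring MINIMAX_ALLOWED_MODELS during registry-level filtering. As a rough illustration of that behavior (the env-var name is from this PR; the parsing and filtering logic below is a hypothetical sketch, not the project's actual ModelRestrictionService implementation):

```python
import os

# Sketch: filter candidate MiniMax models against a comma-separated
# MINIMAX_ALLOWED_MODELS allowlist before any fallback selection.
# An unset/empty variable means no restriction.
def allowed_minimax_models(candidates: list[str]) -> list[str]:
    raw = os.environ.get("MINIMAX_ALLOWED_MODELS", "").strip()
    if not raw:
        return candidates  # no restriction configured
    allowed = {name.strip().lower() for name in raw.split(",") if name.strip()}
    return [m for m in candidates if m.lower() in allowed]


# With the restriction from Codex's example (only m2 allowed), the
# registry should never pick MiniMax-M2.5:
os.environ["MINIMAX_ALLOWED_MODELS"] = "minimax-m2"
print(allowed_minimax_models(["MiniMax-M2.5", "MiniMax-M2"]))  # -> ['MiniMax-M2']
```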

